Patent abstract:
Method, and associated system, to display virtual reality information in a vehicle (1), where a virtual reality device (3) is used by a user (2) inside the vehicle (1), with the steps of i) capturing at least one image (5) of an environment of the user (2), ii) classifying at least one segment (52) of the at least one image (5) as interior (11) of the vehicle (1), exterior (12) of the vehicle (1), or additional object (13) of the vehicle (1), iii) modifying the at least one segment (52) based on the classification of the at least one segment (52), iv) generating a virtual environment of the user (2), and v) displaying the generated virtual environment by means of the virtual reality device (3). In this way, virtual reality information is displayed in the vehicle, and this information can be modified to offer new functionalities in the vehicle (1). (Machine-translation by Google Translate, not legally binding)
Publication number: ES2704327A1
Application number: ES201731119
Filing date: 2017-09-15
Publication date: 2019-03-15
Inventor: Parejo Alejandro Moreno
Applicant: SEAT SA;
IPC main classification:
Patent description:

[0001]
[0002] Method and system to display virtual reality information in a vehicle
[0003]
[0004] OBJECT OF THE INVENTION
[0005]
[0006] The present patent application has as its object a method for displaying virtual reality information in a vehicle according to claim 1, which incorporates remarkable innovations and advantages, together with a system for displaying virtual reality information in a vehicle, according to claim 16.
[0007]
[0008] BACKGROUND OF THE INVENTION
[0009]
[0010] Current vehicles have cameras that capture images with information about the user's environment, both inside the passenger compartment and outside the vehicle. Said information, present in the user's environment, can be processed in order to improve its presentation through a virtual reality device, so that different functionalities are provided for the user. There is therefore a need for the user of a vehicle to perceive his environment in a filtered way, in which images and information that are less accessible from his point of view are displayed clearly and according to their importance and priority at every moment of driving.
[0011]
[0012] It is known from the state of the art, as reflected in document DE102016008231, a method for generating a virtual view of a vehicle that has a plurality of cameras and an image processing unit, where the virtual view is superimposed on the field of vision of the head of a vehicle occupant. A virtual image is created in virtual reality glasses.
[0013]
[0014] Said document describes the steps of i) capturing images of the exterior of the vehicle by external cameras, ii) processing said captured images, iii) superimposing the processed images on the occupant's field of view, and iv) projecting these images onto the virtual reality glasses. The factors taken into account for the previous stages are: the movement of the vehicle and/or of exterior elements with respect to the vehicle, the position and orientation of the glasses inside the vehicle, the capture of an interior camera that detects the movement of markers on the glasses, the processing of images from a camera of the glasses themselves, and the values taken by the inertial sensors of the virtual reality glasses.
[0015]
[0016] It is noted that the generation of a virtual environment does not take into account the possibility that there are objects or intermediate elements present in the passenger compartment of the vehicle, which can be interposed between, for example, the user and a dashboard of the vehicle, or between the user and the vehicle's windshield. Examples of said intermediate elements can be the hands and arms of the driver of the vehicle, a companion located inside the vehicle, interior decoration of the vehicle, or objects such as money or credit cards that the user must manipulate while driving. The lack of detection and subsequent representation of these intermediate elements in the virtual environment can cause problems: the user does not know exactly what he is touching, or how far his hand is from contacting a screen, which can lead to inaccuracies or even damage to the passenger compartment of the vehicle, or to the driver himself.
[0017]
[0018] Thus, and in view of the foregoing, it is seen that there is still a need for a method and system to display virtual reality information in a vehicle, such that it is possible to classify the parts of the image received by a camera placed at the point of view of the user, so that this information can be modified to offer additional functionalities.
[0019]
[0020] DESCRIPTION OF THE INVENTION
[0021]
[0022] The present invention consists of a method and system for displaying virtual reality information in a vehicle, by means of a virtual reality device, in which images with information relevant to the user are represented, framed within his own visual field. Usually, the virtual reality device will consist of virtual reality glasses.
[0023]
[0024] Thus, the user, in order to enjoy the advantages of the present invention, has to put on virtual reality glasses, so that he visualizes the environment through the virtual reality images shown by said device. That is, the user does not visualize the environment directly, but observes everything through, in this case, the virtual reality glasses.
[0025]
[0026] The virtual reality device, in particular virtual reality glasses, comprises one or more cameras that capture and record the user's point of view. The method and system of the present invention show the user the image coming from these cameras, except that, before representing the content captured and/or recorded by the camera or cameras, the system processes the image to provide a series of functionalities and features.
[0027]
[0028] In order to achieve this objective, the system divides the captured images into three parts or layers, in order to process the correct part or layer:
[0029] - the pixels of the image corresponding to the interior or passenger compartment of the vehicle: the interior coverings, the dashboard, the seats, the rear-view mirrors and other components that make up and delimit the interior of the vehicle,
[0030] - the pixels of the image corresponding to the exterior of the passenger compartment, understood as everything outside the vehicle interior, including any element that appears beyond the windows, windshield and/or rear window of the vehicle, and
[0031] - the pixels of the image corresponding to additional objects, such as packages, passengers or the user himself.
[0032]
[0033] In a preferred embodiment of the invention, and for the system to provide the maximum number of functionalities, the virtual reality device comprises the plans, layout or distribution of the interior of the vehicle. It also includes an accurate system for positioning and orienting the virtual reality device in space.
[0034]
[0035] In particular, the invention consists of a method for displaying virtual reality information in a vehicle, where a virtual reality device is used by a user inside the vehicle, where the method comprises the steps of:
[0036] i) capturing at least one image of an environment of the user, where the at least one captured image coincides with a field of view of the user, where the at least one image comprises at least one segment,
[0037] ii) classifying the at least one segment of the at least one image as:
[0038] a) interior of the vehicle,
[0039] b) exterior of the vehicle, and
[0040] c) additional object of the vehicle,
[0041] iii) modifying the at least one segment based on the classification of the at least one segment,
iv) generating a virtual environment of the user, where the virtual environment comprises at least one virtual segment, where the at least one generated virtual segment is based on the modified segment, and
[0042] v) displaying the generated virtual environment by means of the virtual reality device.
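By way of illustration only, and without limiting the claimed method in any way, steps i) to v) could be organised in software roughly as sketched below. Every name in this sketch (show_vr_frame, the label constants, the placeholder callables) is hypothetical and does not come from the patent.

```python
import numpy as np

# Hypothetical numeric labels for the three classes of step ii).
INTERIOR, EXTERIOR, ADDITIONAL_OBJECT = 0, 1, 2

def show_vr_frame(capture, classify, modify, display):
    """Minimal sketch of steps i)-v); the four callables are placeholders."""
    image = capture()                    # i)  frame matching the user's field of view
    labels = classify(image)             # ii) one class label per pixel/segment
    modified = modify(image, labels)     # iii) alter pixels according to their class
    virtual_environment = modified       # iv) virtual segments come from the modified ones
    display(virtual_environment)         # v)  render on the VR device screen
    return virtual_environment

# Dummy usage: an 8x8 grey frame, everything classified as interior and left unchanged.
frame = np.full((8, 8, 3), 128, dtype=np.uint8)
show_vr_frame(lambda: frame,
              lambda img: np.full(img.shape[:2], INTERIOR, np.uint8),
              lambda img, lab: img,
              lambda env: None)
```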
[0043]
[0044] In this way the method of the invention manages to separate the information captured through the camera(s) in order to display it on the virtual reality device, altering only a part of the information.
[0045]
[0046] This is done by changing the design of the image digitally, which implies a cost saving compared to achieving the same aesthetic change through real, non-virtual means. It also increases the possibilities of personalization of the environment that the user contemplates, and of the appearance shown both inside the passenger compartment and outside. The separation by layers ensures that, when only the visual appearance of, for example, the interior of the vehicle is to be modified, neither additional objects located inside the vehicle (such as passengers or an arm of the driver himself) nor the exterior of the vehicle are modified by mistake.
[0047]
[0048] In particular, the method of the invention allows obtaining advantages such as:
[0049] - modifying the luminosity of the exterior images of the vehicle, offering improved external visibility and, consequently, greater driving safety;
[0050] - representing augmented reality in the images of the exterior of the vehicle;
[0051] - removing areas inside the passenger compartment, such as a pillar or a door, in order to eliminate areas of low visibility and blind spots, replacing said images by those captured, for example, by external cameras, thus increasing the visibility of the driver;
[0052] - changing the design of interior zones of the vehicle at no cost;
[0053] - keeping in the foreground intermediate objects between the user and the passenger compartment, such as other occupants of the vehicle, objects of the user, or decorative objects inside the vehicle.
[0054]
[0055] Additionally, and thanks to dividing the captured image into the aforementioned three parts or layers, the functionality of accurately representing the part corresponding to the passenger compartment with different designs can be offered, without overlapping this projection with passengers or objects.
[0056]
[0057] By segment is meant a portion or part of an image. The subdivision of an image into portions or segments can be done by image processing, dividing the image by colors, volumes, geometries or contours, thus obtaining the plurality of segments that make up the image. Alternatively, the subdivision of the image can be by pixels or another equivalent. Thus the term segment of an image is equivalent to a pixel of said image, the two being interchangeable and interpreted with the same meaning.
[0058]
[0059] Additionally, virtual environment is understood as the space that surrounds the user of a virtual reality system, generated from images captured from the real environment, and these captured images may have been previously processed. In this way, just as a captured image of the environment comprises at least one segment, the virtual environment comprises at least one virtual segment, the virtual segment being understood as the subdivision, portion or part of the virtual environment.
[0060]
[0061] By capturing at least one image that coincides with a field of vision of the user, it is understood that said image is what the user would observe if he were not wearing a virtual reality device in front of his eyes, or any other obstacle that hinders his vision of the reality in front of him.
[0062]
[0063] It should be specified that, according to the described method, the step of modifying the at least one segment based on the classification of the at least one segment comprises modifying one, two or all of said layers, so that the generated virtual environment includes some modification with respect to the real environment.
[0064]
[0065] According to another aspect of the invention, the step of classifying the at least one segment is based on a location of at least one element of the user's environment with respect to the virtual reality device, wherein the method additionally comprises the steps of: a) obtaining at least one real vector, wherein the at least one real vector comprises a modulus and a direction between the virtual reality device and the at least one element of the user's environment,
[0066] b) determining a position of the virtual reality device in the vehicle,
[0067] c) assigning the at least one real vector to the at least one segment, and
[0068] d) comparing the at least one real vector with at least one theoretical vector, where the at least one theoretical vector is previously known.
[0069]
[0070] In this way it becomes possible to classify and label the at least one segment of the at least one image as interior of the vehicle, exterior of the vehicle, or additional object of the vehicle, based on the comparison of the at least one real vector with the at least one theoretical vector.
[0071]
[0072] It should be mentioned that the at least one theoretical vector is based on an arrangement of the vehicle interior and the position of the virtual reality device in the vehicle, where the arrangement of the vehicle interior is previously known. In this way, the physical orientation of the virtual reality device in the vehicle can be determined.
[0073]
[0074] It should be pointed out that the arrangement of the vehicle interior is like a 3D (three-dimensional) map of the interior, so that, knowing the position of the virtual reality device, all the theoretical vectors between the virtual reality device and the at least one element of the environment are known. By element of the environment is meant any body or object that is located around the user. Thus, an element of the environment can be, for example, a windshield of the vehicle or a seat of the vehicle. In this way, the distance and direction of a segment or pixel of the captured image with respect to a specific reference point is known. By way of example, the fixed reference point or origin of the real vector may be the lens of the at least one camera, where the camera captures the at least one image of the user's environment.
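As a purely hypothetical numerical sketch of this idea (none of the names or values below come from the patent), a theoretical vector between the device and a point of the stored 3D arrangement could be computed from the device pose as follows:

```python
import numpy as np

def theoretical_vector(point_interior, device_position, device_rotation):
    """Return the modulus and unit direction (in device coordinates) of the vector
    from the VR device to a point of the known 3D arrangement of the passenger
    compartment. Positions are in vehicle coordinates (metres); device_rotation
    is a 3x3 rotation matrix from vehicle to device coordinates."""
    delta = np.asarray(point_interior, float) - np.asarray(device_position, float)
    delta_device = np.asarray(device_rotation, float) @ delta   # express in device frame
    modulus = float(np.linalg.norm(delta_device))
    return modulus, delta_device / modulus

# Example: a point on the windshield roughly 0.9 m ahead of the device.
mod, direction = theoretical_vector([2.0, 0.0, 1.3], [1.1, 0.0, 1.0], np.eye(3))
```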
[0075]
[0076] On the other hand, the method comprises a further step of determining at least one window area in the at least one captured image, wherein the step of determining the at least one window area comprises recognizing at least one predefined geometric shape by means of image processing and/or determining at least one marker of the at least one image, wherein the marker comprises a predefined color, and/or comparing the at least one real vector with the at least one theoretical vector. In this way, the method makes it possible to discriminate the window area in the image captured of the user's surroundings. The images of said zone can therefore be attributed to what comes from outside the vehicle. The window area refers to the glazed area of the vehicle, which protects the user from external elements while allowing him to visualize the surrounding environment. The windshield, rear window and side windows together make up the window area of the vehicle.
[0077] As previously mentioned, a map or distribution of the interior of the vehicle is previously known. If, as a result of the comparison, the real vector obtained coincides with a known theoretical vector assigned to a window area, that is, both the angles and the moduli coincide, it can be concluded that the segment belongs to the window area.
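A minimal sketch of one of the possibilities mentioned above, detecting window frames tagged with a predetermined colour, could simply threshold the captured image; the marker colour and tolerance used here are invented for the example and are not values from the patent.

```python
import numpy as np

# Hypothetical marker colour painted on the window frames (RGB) and a tolerance.
MARKER_RGB = np.array([250, 40, 200], dtype=np.int16)
TOLERANCE = 30

def window_marker_mask(image_rgb):
    """Boolean mask of pixels whose colour is close to the predefined marker colour."""
    diff = np.abs(image_rgb.astype(np.int16) - MARKER_RGB)
    return np.all(diff <= TOLERANCE, axis=-1)

# Toy 2x2 image: two marker-coloured pixels, two ordinary pixels.
img = np.array([[[250, 40, 200], [10, 10, 10]],
                [[255, 60, 190], [90, 90, 90]]], dtype=np.uint8)
mask = window_marker_mask(img)   # True for the two marker-coloured pixels
```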
[0078]
[0079] In a preferred embodiment there may be means for facilitating the location of the windows or the window area, such as window frames painted with a predetermined color, which are detected in a step of processing the at least one captured image of the user's environment.
[0080]
[0081] Thus, and advantageously, the at least one segment is classified as exterior of the vehicle if the at least one segment is arranged in the at least one window area in the at least one image. In this way, and as mentioned, a correspondence can be established between the at least one segment of the image and the window area of the passenger compartment.
[0082]
[0083] According to another aspect of the invention, the at least one segment is classified as interior of the vehicle if the real vector of the at least one segment is substantially the same as the theoretical vector, where both the angles and the moduli of the vectors coincide. It is noted that in this case the theoretical vector is not assigned to a window area. In this way, the method makes it possible to discriminate the at least one segment of the image belonging to the interior of the passenger compartment.
[0084]
[0085] According to yet another aspect of the invention, the at least one segment is classified as an additional object of the vehicle if the modulus of the real vector of the at least one segment is smaller than the modulus of the theoretical vector. It is emphasized that, whether or not the theoretical vector is assigned to a window area, a real vector modulus lower than the theoretical vector modulus implies that the at least one segment of the image belongs to an additional object present in the vehicle interior.
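Gathering the three rules above into one place, a hedged sketch of the per-segment decision could look as follows; the tolerance for "substantially the same" is an assumption of this example, not a value given in the patent.

```python
INTERIOR, EXTERIOR, ADDITIONAL_OBJECT = "interior", "exterior", "additional_object"

def classify_segment(real_modulus, theoretical_modulus, is_window_area, tol=0.05):
    """Classify one segment from the modulus of its real vector (measured, metres),
    the theoretical modulus expected from the 3D arrangement, and whether the
    matching theoretical vector points at a window area."""
    if real_modulus < theoretical_modulus - tol:
        # Something sits between the device and the cabin surface or window.
        return ADDITIONAL_OBJECT
    if is_window_area:
        return EXTERIOR
    return INTERIOR

# Examples: a hand 0.4 m away in front of trim expected at 0.7 m, and a pixel
# falling on the windshield whose real and theoretical distances agree.
assert classify_segment(0.4, 0.7, is_window_area=False) == ADDITIONAL_OBJECT
assert classify_segment(1.05, 1.05, is_window_area=True) == EXTERIOR
```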
[0086]
[0087] Note, on the other hand, that the step of determining a position of the virtual reality device in the vehicle comprises determining a location of the user's head by means of image processing, and/or determining a location of at least one reference point of the virtual reality device by means of image processing, and/or determining a location of the virtual reality device by means of triangulation, and/or determining a location of the virtual reality device by means of at least one inertial system arranged in the virtual reality device.
[0088]
[0089] Additionally, the stage of displaying the virtual environment comprises representing a virtual field of vision substantially equal to the user's field of vision, where the virtual field of vision comprises the at least one virtual segment corresponding to the segment of the interior of the vehicle and/or the exterior of the vehicle and/or at least one additional object of the vehicle. Optionally, the virtual field of vision represented can also be greater than the field of vision of the user, so that it is possible to increase the field of vision of the user, favoring driving safety.
[0090]
[0091] In a preferred embodiment of the invention, the step of generating a virtual environment comprises overlaying the at least one virtual segment corresponding to the at least one additional object segment of the vehicle on the at least one interior segment of the vehicle, and comprises overlaying the at least one virtual segment corresponding to the at least one additional object segment of the vehicle on the at least one exterior segment of the vehicle.
[0092]
[0093] In this way the method allows the virtual representation not only of the interior part of the passenger compartment, or only of the exterior part of the vehicle, as they are captured by the at least one camera, but also the virtual representation of at least one additional object, either because it is captured by at least one interior camera of the passenger compartment and/or of the virtual reality device, or because its representation is determined to be convenient for some functionality, or to improve the user's experience of the vehicle. Thus, the content of this additional object layer is not lost or eliminated, but is suitably represented, superimposed on the rest of the layers of the interior and exterior of the vehicle. Thus, the at least one segment of the at least one image classified as an additional object of the vehicle will be duly represented in the virtual environment, its representation in the virtual environment being prioritized. It is important that the generation of the virtual environment does not eliminate segments classified as an additional object of the vehicle, since this could create inappropriate situations, such as eliminating from the generated virtual environment a driver's arm while he manipulates a vehicle actuator, eliminating a passenger with whom the user is talking, or eliminating some coins when the user is paying a toll.
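A minimal compositing sketch, assuming per-pixel labels like those of the earlier sketches, keeps the additional-object layer on top of whatever redesign was applied to the interior and exterior layers; all names and values here are illustrative assumptions.

```python
import numpy as np

ADDITIONAL_OBJECT = 2   # hypothetical label value, as in the earlier sketches

def compose_virtual_frame(original, restyled, labels):
    """Pixels labelled as additional objects keep their original appearance;
    every other pixel takes the restyled interior/exterior appearance."""
    virtual = restyled.copy()
    mask = labels == ADDITIONAL_OBJECT
    virtual[mask] = original[mask]          # hands, passengers, coins... stay visible
    return virtual

# Toy example: 2x2 frame where the top-left pixel is the driver's hand.
orig = np.zeros((2, 2, 3), np.uint8)
orig[0, 0] = [200, 150, 120]
restyled = np.full((2, 2, 3), 60, np.uint8)          # e.g. a recoloured dashboard
labels = np.array([[2, 0], [0, 1]], np.uint8)
out = compose_virtual_frame(orig, restyled, labels)  # out[0, 0] keeps the hand colour
```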
[0094] According to another aspect of the invention, modifying the at least one additional object segment of the vehicle comprises altering a color, a texture and / or a shape of the at least one segment, so that a design of the additional object is altered in the virtual environment shown by means of the virtual reality device. In this way, the virtual environment represented by the virtual reality device can present the information in an enhanced way, or with additional functionalities. A transparency of the additional object can be created in the generated virtual environment, in order to enhance or give more importance to a segment classified as inside the vehicle or outside of the vehicle, but its representation in the virtual environment is not eliminated.
[0095]
[0096] Advantageously, the step of generating a virtual environment of the user comprises adding at least one additional object in the virtual environment, where the at least one additional object replaces the at least one interior segment of the vehicle and/or at least one exterior segment of the vehicle. In this way an intermediate virtual object is created between the user and the image corresponding to the interior of the vehicle, or passenger compartment, or to its exterior, said layer being superimposed on those corresponding to the interior and exterior parts of the image.
[0097]
[0098] From the above, additional advantageous effects are derived, such as being able to represent, for example, virtual passengers seated and correctly oriented, in order to accompany the user; to represent, for example, a virtual navigator that, in addition to giving navigation instructions, talks about other topics or reads the news of the day; to virtually represent, for example, the contact with whom the user is having a telephone conversation or who has sent a message; or to represent, for example, a cartoon character to distract underage occupants located in the rear seats of the vehicle. Another possible advantageous effect would be to modify the appearance of real passengers, using virtual representations, as an element of entertainment.
[0099]
[0100] On the other hand, the step of modifying the at least one interior segment of the vehicle comprises altering a color, a texture and/or a shape of the at least one segment, so that a design of the vehicle interior is altered in the virtual environment shown by means of the virtual reality device. In this way, the method makes it possible, in addition to changing the appearance of the virtual environment, to include additional virtual elements, such as a screen that does not really exist in the passenger compartment, or any other type of virtual actuator. Additionally, modifying the at least one interior segment of the vehicle allows the complete design of the passenger compartment to be altered in a cheap and effective way, without cost for the user. If the user prefers a sportier or a more technological interior, it is enough to load the requested interior. The generated virtual environment will incorporate the interior of the vehicle desired by the user, replacing the segments of the at least one image classified as interior of the vehicle.
[0101]
[0102] According to another aspect of the invention, the method comprises a further step of capturing at least one exterior image of an environment of the vehicle, and wherein modifying the at least one interior segment of the vehicle comprises replacing the at least one interior segment of the at least one image by at least one section of the exterior image of the environment of the vehicle, wherein the at least one section of the exterior image coincides with the part of the field of vision of the user occupied by the at least one substituted segment. In this way the method allows at least one interior area of the vehicle to become transparent, thereby showing the exterior area of the vehicle. This exterior image may have been captured by an exterior camera of the wide-angle type. Additionally, the degree of transparency can be modified, with an interior segment partially or totally transparent depending on an accident risk or on certain driving conditions.
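As a hedged sketch of this "transparent trim" effect, and assuming the exterior image has already been warped so that each of its pixels corresponds to the same line of sight as the captured image (that registration step is outside this example), the blending could be done as follows:

```python
import numpy as np

def apply_transparency(captured, exterior_view, interior_mask, alpha=1.0):
    """Blend the exterior view into the pixels classified as interior.
    alpha=1.0 makes the trim fully transparent, 0.0 leaves it unchanged;
    'exterior_view' is assumed to be aligned pixel-by-pixel with 'captured'."""
    out = captured.astype(np.float32)
    ext = exterior_view.astype(np.float32)
    m = interior_mask
    out[m] = (1.0 - alpha) * out[m] + alpha * ext[m]
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: a 50 %-transparent door panel; alpha could be raised, for instance,
# when a cyclist is detected alongside the vehicle.
cap = np.full((4, 4, 3), 80, np.uint8)
ext = np.full((4, 4, 3), 200, np.uint8)
mask = np.zeros((4, 4), bool)
mask[:, :2] = True                       # left half of the frame is door trim
blended = apply_transparency(cap, ext, mask, alpha=0.5)
```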
[0103]
[0104] Advantageously, modifying the at least one exterior segment of the vehicle comprises altering a brightness, a color, a luminosity, a contrast and/or a saturation of the at least one segment, so that a visualization of the exterior of the vehicle is altered in the virtual environment shown by means of the virtual reality device. In this way the method makes it possible to modify the conditions of luminosity and visibility, eventually improving on the real conditions.
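A minimal sketch of such an adjustment, applied only to the pixels classified as exterior, is given below; the gain and offset values are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

def enhance_exterior(image, exterior_mask, gain=1.4, offset=10.0):
    """Raise brightness/contrast only where the pixel was classified as exterior:
    gain > 1 stretches the contrast, offset lifts the overall luminosity."""
    out = image.astype(np.float32)
    out[exterior_mask] = out[exterior_mask] * gain + offset
    return np.clip(out, 0, 255).astype(np.uint8)

# Night-driving style example: brighten only the windshield area of a dark frame.
frame = np.full((3, 3, 3), 30, np.uint8)
window = np.zeros((3, 3), bool)
window[0] = True                              # assume the top row is window area
brighter = enhance_exterior(frame, window)    # top row becomes 52 instead of 30
```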
[0105] Additionally, the method comprises a further step of capturing at least one second image by means of at least one second camera, wherein the at least one second camera comprises night vision and/or a wide-angle lens, and wherein modifying the at least one exterior segment of the vehicle comprises replacing the at least one segment of the at least one image by at least one section of the second image captured by the at least one second camera, wherein the at least one section of the second captured image coincides with the part of the field of view of the user occupied by the at least one substituted segment. In this way, the method makes it possible to integrate into the virtual environment represented by the virtual reality device at least one image, whether from night vision or from a thermal, infrared or light-amplifying camera, with which the benefits of the image of the virtual environment shown are enhanced, favoring the visualization by the user of the environment through which he circulates.
[0106]
[0107] These advantages of modifying the appearance of the exterior and improving the visibility conditions can be realized in any of the following possibilities:
[0108] - increasing the brightness of an environment with low lighting or, alternatively, decreasing the brightness of a bright environment, avoiding glare;
[0109] - representing a night vision, infrared or thermal image, an overlap of several of the previous ones, or some of the previous ones overlapped with the real image;
[0110] - modifying the weather conditions of the exterior visually in the generated virtual environment. For example, the user can define a sunny virtual environment despite the weather being cloudy, which implies advantages in visibility and mood;
[0111] - improving the user's vision using cameras with a greater angle or with zoom or enlargement of the image, which is especially advantageous for people with vision problems.
[0112]
[0113] It is worth mentioning that the step of generating a virtual environment for the user comprises incorporating at least one item of additional virtual information, so that the user has information accessible by means of images in a more visual way, enabling a more immediate understanding. Thus, information relating to the vehicle can be represented in the virtual environment, such as driving variables, GPS navigation data, or various pieces of information in the windshield area, so that the user can visualize said information without taking his eyes off the road. In addition, augmented reality information can also be represented, signaling the position of points of interest along the route of the vehicle, or other additional information.
[0114]
[0115] On the other hand, the object of the present invention is a system for displaying virtual reality information in a vehicle, where the system comprises:
[0116] - a virtual reality device, where the virtual reality device comprises at least one screen,
[0117] - at least one first camera, wherein the at least one first camera is configured to capture at least one image, where the at least one image coincides with a user's field of view, and
[0118] - at least one processing unit, wherein the at least one processing unit is configured to classify the at least one segment of the at least one image as interior of the vehicle, exterior of the vehicle and additional object of the vehicle, and wherein the at least one processing unit is configured to modify the at least one segment of the at least one image based on the classification of the at least one segment, and wherein the at least one processing unit is configured to generate a virtual environment of the user, where the virtual environment comprises the at least one virtual segment, where the at least one generated virtual segment is based on the modified segment.
[0119]
[0120] In this way the system is able to classify the parts or segments of the at least one received image. In detail, the at least one first camera is placed at the user's point of view, so that this information can be modified to offer functionalities, the system also presenting the advantages related to the described method.
[0121]
[0122] The virtual reality device is capable of being worn on the head of a user of the vehicle, thereby making it easier for the virtual reality device to capture the user's field of vision through the at least one first camera.
[0123]
[0124] Advantageously, the system comprises at least one position detection means, wherein the position detecting means is configured to determine a position of the virtual reality device in the vehicle.
[0125]
[0126] In a preferred embodiment of the invention, the system comprises at least one distance sensor, wherein the at least one distance sensor is configured to obtain at least one real vector, wherein the at least one real vector comprises a modulus and a direction between the virtual reality device and the at least one element of the user's environment. In this way it is possible to determine the distance between the virtual reality device and the physical environment of the vehicle. Thus, a lidar-type sensor (Laser Imaging Detection and Ranging) can be arranged adjacent to the at least one first camera, so that the at least one segment of the image is paired with a real vector.
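Assuming a depth map aligned with the first camera and a simple pinhole camera model (the focal length and principal point below are invented values, not parameters from the patent), pairing every pixel with a real vector, a modulus plus a unit direction, could be sketched as follows:

```python
import numpy as np

def real_vectors_from_depth(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    """Turn a per-pixel depth map (metres, aligned with the first camera) into real
    vectors: one modulus and one unit direction per pixel, in camera coordinates.
    fx, fy, cx, cy are assumed pinhole intrinsics."""
    h, w = depth.shape
    cx = (w - 1) / 2.0 if cx is None else cx
    cy = (h - 1) / 2.0 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth                 # back-project each pixel
    y = (v - cy) / fy * depth
    vectors = np.stack([x, y, depth], axis=-1)
    modulus = np.linalg.norm(vectors, axis=-1)
    direction = vectors / np.maximum(modulus[..., None], 1e-9)
    return modulus, direction

# 4x4 depth map with every pixel reported 0.8 m in front of the camera.
mod, direction = real_vectors_from_depth(np.full((4, 4), 0.8))
```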
[0127]
[0128] According to another aspect of the invention, the system comprises at least one memory unit, wherein the at least one memory unit comprises an arrangement of the interior of the vehicle, so that at least one theoretical vector is calculated based on an arrangement of the passenger compartment of the vehicle, preferably 3D and previously known, and the position of the virtual reality device in the vehicle.
[0129] Note that, therefore, the arrangement of the interior of the vehicle is previously known in a preferred embodiment. This means knowing the plan or map of the interior of the vehicle and, therefore, the theoretical distances to each object in the environment according to the position occupied by the virtual reality device inside the vehicle.
[0130]
[0131] According to another aspect of the invention, the system comprises at least one detector element, wherein the at least one detector element is configured to detect the position of the additional objects. In order to determine more precisely the distances between the virtual reality device and the environment, at least one additional detector can be placed in order to detect the exact position of the movable elements of the passenger compartment, such as the steering wheel, the glove box, the gear lever or the sun visor. In this way it is ensured that the information obtained by the distance sensor coincides with the theoretical vector. Otherwise, a segment that is actually part of the interior of the vehicle would be classified as an additional object of the vehicle. Thus, the virtual image projected on the virtual reality device coincides with the real one and the user can interact with the vehicle in an appropriate way. Alternatively, the detection of the position of the at least one movable element can be carried out by means of image recognition.
[0132]
[0133] Additionally, the system comprises at least one second camera, wherein the at least one second camera comprises night vision and/or a wide-angle lens. Said at least one second camera can be oriented towards the exterior of the vehicle, or coincide with the user's field of vision.
[0134]
[0135] The system may additionally comprise at least one third camera facing the exterior of the vehicle, wherein the at least one third camera is configured to capture at least one exterior image of the vehicle. Said third camera can be specifically assigned to capturing exterior images of the vehicle, and the information captured by the third camera can be projected on the virtual reality device.
[0136]
[0137] The accompanying drawings show, by way of non-limiting example, a method and system for displaying virtual reality information in a vehicle, constituted in accordance with the invention. Other features and advantages of said method and system for displaying virtual reality information in a vehicle, object of the present invention, will become evident from the description of a preferred, but not exclusive, embodiment, which is illustrated by way of non-limiting example in the accompanying drawings, in which:
[0138]
[0139] BRIEF DESCRIPTION OF THE DRAWINGS
[0140]
[0141] Figure 1.- It is a perspective view of the interior of a vehicle, according to the present invention.
[0142] Figure 2.- It is a first-person view of the field of vision of a user from the position of the driver in the interior of a vehicle, according to the present invention.
[0143] Figure 3.- It is a perspective view of a virtual reality device, according to the present invention.
[0144] Figure 4A.- It is a perspective view of a virtual reality device in a first position, according to the present invention.
[0145] Figure 4B.- It is a perspective view of a virtual reality device in a second position, according to the present invention.
[0146] Figure 5A.- It is a perspective view of the first row of seats of the passenger compartment of a vehicle with two users carrying their respective virtual reality devices, according to the present invention.
[0147] Figure 5B.- It is a perspective view of the field of vision of the driver in the interior of a vehicle, according to the present invention.
[0148] Figure 6A.- It is a perspective view of the virtual environment observed by the driver in the vehicle interior through the virtual reality device, with a first design of the instrument panel, according to the present invention.
[0149] Figure 6B.- It is a perspective view of the virtual environment observed by the driver in the passenger compartment of a vehicle through the virtual reality device, with a second design of the instrument panel, according to the present invention.
[0150] Figure 7A.- It is a perspective view of the virtual environment, with a night view of the exterior, which the driver observes in the passenger compartment of a vehicle through the virtual reality device, according to the present invention.
[0151] Figure 7B.- It is a perspective view of the virtual environment, with a side view of the exterior, through a transparency of the side door of the vehicle, which observes the driver in the passenger compartment of a vehicle through the virtual reality device, in accordance with the present invention.
[0152] Figure 7C.- It is a perspective view of the virtual environment, with a view of a virtual character, which would be observed by the passenger of a vehicle through the virtual reality device, according to the present invention.
[0153] Figure 8.- It is a block diagram of the elements that make up the system of the present invention.
[0154]
[0155] DESCRIPTION OF A PREFERRED EMBODIMENT
[0156]
[0157] In view of the mentioned figures and, according to the numbering adopted, one can observe in them an example of a preferred embodiment of the invention, which comprises the parts and elements that are indicated and described in detail below.
[0158]
[0159] As can be seen in Figure 5A, the system and method of the present invention are based on projecting virtual reality information by means of a virtual reality device 3. The virtual reality device 3 is preferably arranged on the head 21 of the user 2, whether a driver or a passenger, when they are inside a vehicle 1.
[0160]
[0161] By way of summary, the method of the present invention performs the following actions, in a preferred embodiment:
[0162] - capturing at least one image 5, by means of at least one first camera 31, that coincides with the field of view 22 of the user 2;
[0163] - measuring at least one separation distance between the user 2 and each pixel, segment or area of the image 5 captured, so that real distances are obtained between the user 2 and the environment;
[0164] - positioning the virtual reality device 3 or virtual reality glasses inside the vehicle 1;
[0165] - comparing a real distance obtained in the previous measurements with a theoretical distance according to the known 3D or three-dimensional arrangement of the vehicle 1;
[0166] - classifying the pixels of the captured image 5 into different layers, depending on whether they belong, first, to the exterior 12, that is, behind the glazing of the passenger compartment of the vehicle 1; second, to the interior or passenger compartment 11, where the real distance coincides with the theoretical distance according to the 3D arrangement; or, third, to an object foreign to the passenger compartment, or additional object 13, but located within it, where the real distance is less than the theoretical distance according to the 3D arrangement;
[0167] - modifying the captured image 5, so that new information is displayed, in at least one of the previously classified layers, either by modifying the pixels corresponding to the layer of the exterior 12 of the passenger compartment, of the interior 11 of the passenger compartment, and/or of an additional object 13 present in the interior of the vehicle 1.
[0168]
[0169] As for the system of the invention, in summary, it comprises the following elements in a preferred embodiment:
[0170] - virtual reality device 3 for representing a virtual reality information to the user 2;
[0171] - camera 31 for capturing the environment and the direction of vision of the user 2;
[0172] - system to position the virtual reality device 3 in the environment;
[0173]
[0174] Figure 1 shows, illustratively, the interior of a vehicle 1, with a processing unit 4 and a memory unit 41, preferably located under the dashboard. A plurality of position detection means 7 of the virtual reality device 3 are also observed in the vehicle 1. An example for positioning the virtual reality device 3 is by means of transceivers, for example of infrared or electromagnetic waves. Thus, by means of a triangulation process and knowing the time of emission and response of said waves with devices located in known locations of the vehicle, its position can be determined precisely.
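Purely as an illustration of that triangulation idea (the anchor positions, the conversion from round-trip time to distance and the solver are assumptions of this sketch, not details given in the patent), the device position could be recovered from its distances to transceivers at known locations with a small least-squares problem:

```python
import numpy as np

def locate_device(anchors, distances):
    """Estimate the 3D position of the VR device from its distances to transceivers
    at known positions, linearising |x - a_i|^2 = d_i^2 against the first anchor."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(distances, float)
    a0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - d[1:]**2) + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Example: four transceivers in the cabin and noiseless distances to a known point.
anchors = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.2, 0.0], [0.0, 0.0, 1.0]]
true_pos = np.array([0.6, 0.4, 0.9])
dists = [np.linalg.norm(true_pos - np.asarray(a)) for a in anchors]
print(locate_device(anchors, dists))   # approximately [0.6, 0.4, 0.9]
```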
[0175]
[0176] Figure 2 shows, in an illustrative way, a front view from the position of the driver in the interior of a vehicle 1, according to the present invention. In it, the position of the processing unit 4 and of the memory unit 41 can be seen. The different areas into which the field of vision of a user 2 positioned in the driver's seat is divided can also be seen: interior 11, exterior 12 and window area 15. This could be virtual reality information displayed to the driver of the vehicle in which the segments 52 of the image 5 classified as additional objects 13 of the vehicle 1 have been eliminated, such as, for example, the driver's own body. This illustrates the problem that the elimination of said layer or segments 52 can pose, since the user does not know where his hands are in order to be able to interact with the vehicle 1.
[0177] Figure 3 shows, in an illustrative way, a perspective view of a virtual reality device 3, according to the present invention. Said virtual reality device 3 is preferably a pair of virtual reality glasses. The virtual reality glasses preferably comprise a first camera 31 for capturing the at least one image 5; at least one distance sensor 9, where the at least one distance sensor 9 is configured to obtain at least one distance between the user 2 and the objects of the environment; an accelerometer 34 and a gyroscope 35 in order to determine a position of the virtual reality device 3 in the vehicle 1; as well as a processing unit 4. Thus, the virtual reality device 3 knows where it is positioned, and knows the distance to each point of the interior of the vehicle 1.
[0178]
[0179] Additionally, the virtual reality device 3, or virtual reality glasses, comprises any of the following elements: a high resolution camera, high quality autofocus systems, a night and/or infrared camera, and systems capable of determining the distance to all the objects in the image 5 (e.g. a Lidar).
[0180]
[0181] The system of the invention has a processing unit 4 that can be located either in the vehicle 1 or in the virtual reality device 3. In a preferred embodiment, the processing unit 4 is integrated into the virtual reality device 3, preferably into the virtual reality glasses, so that the user 2 will be able to leave the vehicle 1 and his glasses will have the intelligence to continue, in the exterior 12 of the vehicle 1, with functionalities similar to those he had in the interior 11 of the vehicle 1. In the embodiment of the invention in which the processing unit 4 is in the vehicle 1, as shown in figure 1, the vehicle sends to the virtual reality device 3 only the information that it should show. In this case it is the processing unit 4 of the vehicle that carries out the processing work. In this case, the transmission means between the processing unit 4 of the vehicle 1 and the virtual reality device 3 are of high efficiency and speed, so that the user 2 does not perceive a lag between what he sees and reality.
[0182]
[0183] Figure 4A shows, illustratively, a virtual reality device 3 in a first position, which corresponds to a top view of the virtual reality glasses. Figure 4B shows, illustratively, a virtual reality device 3 in a second position, corresponding to a side view of the virtual reality glasses. In order to position the virtual reality device 3 in the environment, it is possible to place marks or beacons on the glasses that serve as reference points. In figure 4A the marks are arranged in the upper area of the frame. In figure 4B the marks are arranged in the lateral zone of the temples. By means of cameras arranged inside the vehicle, the position and orientation of said marks are determined, thus positioning the virtual reality device 3.
[0184]
[0185] More particularly, as can be seen in figures 2, 5A, 5B, 6A, 6B, 7A, 7B, 7C, the method for displaying virtual reality information in a vehicle 1 comprises the steps of: i) capturing at least one image 5 of an environment of a user 2, wherein the at least one captured image 5 coincides with a field of view 22 of the user 2, wherein the at least one image 5 comprises at least one segment 52,
[0186] ii) classifying the at least one segment 52 of the at least one image 5 as interior 11 of the vehicle 1, exterior 12 of the vehicle 1, or additional object 13 of the vehicle 1, iii) modifying the at least one segment 52 based on the classification of the at least one segment 52,
[0187] iv) generating a virtual environment of the user 2, wherein the virtual environment comprises at least one virtual segment 6, where the at least one generated virtual segment 6 is based on the modified segment 52, and
[0188] v) displaying the generated virtual environment by means of the virtual reality device 3.
[0189]
[0190] According to another aspect of the invention, the step of classifying the at least one segment 52 is based on a location of at least one element of the environment of the user 2 with respect to the virtual reality device 3, where the method additionally comprises the steps of: a) obtaining at least one real vector 23, wherein the at least one real vector 23 comprises a modulus and a direction between the virtual reality device 3 and the at least one element of the environment of the user 2,
[0191] b) determining a position of the virtual reality device 3 in the vehicle 1,
[0192] c) assigning the at least one real vector 23 to the at least one segment 52, and
[0193] d) comparing the at least one real vector 23 with at least one theoretical vector 24, where the at least one theoretical vector 24 is previously known.
[0194]
[0195] In figure 5A one can observe, illustratively, a vehicle 1 with two users 2 carrying their respective virtual reality devices 3. Both wear virtual reality glasses on their heads 21. It can be observed schematically how the virtual reality device captures images 5 of the environment, coinciding with the field of view 22 of the user 2.
[0196] Furthermore, at least one distance sensor 9 captures at least one modulus and one direction between the virtual reality device 3 and the elements that are in the environment of the user 2, so that a plurality of real vectors 23 are defined. Each segment 52 of the image 5 has at least one real vector 23 associated with it, so that a relative position of the at least one segment 52 with respect to the user 2 is known.
[0197]
[0198] Additionally, at least one position detection means 7 allows the position of the virtual reality device 3 inside the vehicle 1 to be known. Knowing this position is essential in order to locate the user 2 in a previously known three-dimensional map of the vehicle 1. In this way, a plurality of theoretical vectors 24 will be known, indicating the relative position between the objects located in the environment of the user 2 and the user 2. A comparison between the plurality of theoretical vectors 24 and the plurality of real vectors 23 will make it possible to classify the plurality of segments 52 or additional objects of the image 5, thus being able to generate a virtual environment modified to the specific needs of the user 2.
[0199]
[0200] In order to determine the segments 52 of the image representing the exterior 12 of the vehicle 1, at least one window area 15 is determined in the at least one captured image 5. This determination is based on recognizing at least one predefined geometric shape by means of image processing and/or determining at least one marker of the at least one image 5, wherein the marker comprises a predefined color, and/or comparing the at least one real vector 23 with the at least one theoretical vector 24. Note that the window area 15 corresponds to the windshield, or to any glazed or transparent surface of the vehicle 1. Thus, at least one segment 52 is classified as exterior 12 of the vehicle 1 if the at least one segment 52 is arranged in the at least one window area 15 in the at least one image 5.
[0201]
[0202] Figure 5B shows a first generated virtual environment of the user 2. It is emphasized that this first virtual environment does not present any modification with respect to the real environment of the user 2. Thus, the field of vision 22 of a driver user 2 in the interior of a vehicle 1 can be observed, where the different areas of the field of vision 22 can be distinguished, classified as: pixels or segments 52 corresponding to the interior 11 of the vehicle 1, or passenger compartment; pixels corresponding to additional objects 13 not belonging to the interior 11 of the vehicle 1, in this case the hands of the driver user 2; and pixels corresponding to the exterior 12 of the vehicle 1, that is, to the part of the image 5 that falls on the window.
[0203]
[0204] It should be specified that the present invention classifies the pixels of the image 5 according to at least the following layers: exterior 12, interior 11 and additional object 13. The exterior 12 corresponds to the captured pixels that are positioned in the window or window area 15. Therefore, everything that is captured and that, according to the 3D arrangement, corresponds to glass or the window area 15 is equivalent to the exterior 12, as long as the real distance of that pixel is not less than the theoretical distance. As for the interior 11 of the vehicle 1, or passenger compartment, the real distance must coincide with the theoretical distance according to the 3D arrangement. As for the additional object 13, foreign to the interior 11 of the vehicle 1, or passenger compartment, the real distance must be smaller than the theoretical distance according to the 3D arrangement.
[0205]
[0206] It is indicated that an image 5 captured by the first camera 31 comprises a plurality of segments 52. By segments is meant any subdivision of the image 5, whether by volumes, colors, shapes or even pixels, the subdivision of the image preferably being carried out by pixels. Each segment 52 or pixel comprises an associated real vector 23. The comparison of the real vector 23 with the theoretical vector 24 makes it possible to classify the segment 52 into the layers explained above. Depending on the needs of the user, a virtual environment to be displayed by means of the virtual reality device 3 will be generated, where this virtual environment will comprise a plurality of virtual segments 6. The equivalence between segment 52 and virtual segment 6 is highlighted, where the virtual segment 6 will be equivalent to the pixel of the image 5, and may have been modified depending on the generated virtual environment and the needs of the user 2. The following figures show examples of virtual reality information represented to a user 2 in a vehicle 1, in order to illustrate some advantages of the present invention.
[0207]
[0208] In Figure 6A, a first design of the dashboard observed by the driver in the interior of a vehicle 1 through the virtual reality device 3 can be seen, illustratively. In this first design, a first aesthetic solution of the odometer, the speedometer and the various indicators on the dashboard is observed.
[0209]
[0210] Figure 6B shows, illustratively, a second instrument panel design, which is observed by the driver in the interior of a vehicle 1 through the virtual reality device 3. In this second design, a second aesthetic solution of the steering wheel, the odometer, the speedometer and the various indicators present in the dashboard is observed, the second design being different from the first design. In this way, without the need to alter the physical components of the interior of the vehicle, the user 2 can visualize an interior 11 of the vehicle 1 with totally different designs and appearances, by modifying a color, texture and/or shape of the pixels or segments classified as interior 11 of the vehicle 1. It is understood that the architecture cannot be modified since, otherwise, the comparison of the real vector 23 with the theoretical vector 24 would not be correct. The present invention makes it possible to generate, on the same architecture, different designs, and to create new displays or screens on the current architecture.
[0211]
[0212] It is noted that in both Figures 6A and 6B, the virtual environment represented comprises the hands of the driver user 2, through various virtual segments 6. The segments 52 classified as additional object 13 of the vehicle 1, in this case the hands of the user 2, are superimposed on the virtual segment 6 corresponding to a segment classified as interior 11 of the vehicle 1 or classified as exterior 12 of the vehicle 1. Otherwise, the user 2 could not interact with the interior 11 of the vehicle, not knowing where his hands are positioned or which actuator he is activating, which would pose a safety problem while driving.
[0213]
[0214] Figure 7A shows, illustratively, a night view of the exterior 12, which the driver observes in the passenger compartment of a vehicle 1 through the virtual reality device 3. The virtual environment represented comprises a pair of pedestrians and a dog, whose silhouettes appear highlighted with greater luminosity against the dark background, through various virtual segments 6. Thus, a brightness, color, luminosity, contrast and/or saturation of at least one pixel or segment 52 has been modified. The generated virtual segment 6 will allow better visibility of the exterior 12 of the vehicle 1 than if the user visualized the original segment 52.
[0215]
[0216] In figure 7B it is possible to observe, in an illustrative way, a view of the field of vision 22 of a driver of the vehicle 1 when he directs his gaze to the left, that is, looking towards the side door of the vehicle 1. In this case, a third camera 36 facing the exterior 12 of the vehicle 1 captures an exterior image 8. In this specific case, the exterior image 8 captures a cyclist that is close to the vehicle 1. Thus, the at least one segment 52 of the at least one image 5 classified as interior 11 of the vehicle 1, in this particular case the pixels corresponding to a lining of the door of the vehicle 1, is replaced by at least one section of the exterior image 8 captured by the at least one third camera 36, allowing the user 2 to visualize the cyclist.
[0217]
[0218] Thus, the information captured by other cameras 14, 36 can be used, in particular night vision or exterior cameras, or a camera oriented so as to capture what remains behind the user 2, in his blind spot. Additionally, the at least one segment 52 of the image 5 classified as interior 11 of the vehicle 1 is modified by creating at least one virtual segment 6 that applies a degree of transparency with respect to the at least one original segment 52. This embodiment makes it possible to make the area of the A-pillar, or the side of the front windshield, transparent, and to project in its place the information captured by the other cameras 14, 36, in particular an exterior one.
[0219]
[0220] In figure 7C it is possible to observe, in an illustrative way, a perspective view of the virtual environment, with a view of a virtual character, which would be observed by the rear passenger of a vehicle 1 through the virtual reality device 3, in accordance with the present invention. Thus, the represented virtual environment comprises adding at least one additional object 13, in this case the animated character, to the generated virtual environment. Furthermore, it is noted that another passenger of the vehicle is also classified as an additional object 13 of the vehicle. Both additional objects 13 of the vehicle 1 are superimposed on the at least one segment 52 classified as exterior 12 of the vehicle and on the at least one segment 52 classified as interior 11 of the vehicle. Additionally, a color, a texture and/or a shape of the at least one segment 52 classified as additional object 13 of the vehicle 1 can be altered, so that the virtual segment 6 represented in the virtual environment comprises a design or appearance different from that of the original segment 52.
[0221]
[0222] Figure 8 shows, illustratively, a block diagram of the elements that make up the system of the present invention, in which it is possible to appreciate the various interconnections between the virtual reality device 3, the accelerometer 34, the gyroscope 35, the first camera 31, the second camera 14, the third camera 36, the processing unit 4, the memory unit 41, the position detection means 7 and the distance sensor 9.
[0223] Thus, it is also an object of the present invention, as can be seen in figures 2, 5A, 5B, 6A, 6B, 7A, 7B, 7C and 8, a system for displaying virtual reality information in a vehicle 1, wherein the system comprises a virtual reality device 3, wherein the virtual reality device 3 comprises at least one screen; at least one first camera 31, wherein the at least one first camera 31 is configured to capture at least one image 5, where the at least one image 5 coincides with a field of view 22 of the user 2; and at least one processing unit 4, wherein the at least one processing unit 4 is configured to classify the at least one segment 52 of the at least one image 5 as interior 11 of the vehicle 1, exterior 12 of the vehicle 1 and additional object 13 of the vehicle 1, and where the at least one processing unit 4 is configured to modify the at least one segment 52 of the at least one image 5 based on the classification of the at least one segment 52, and wherein the at least one processing unit 4 is configured to generate a virtual environment of the user 2, wherein the virtual environment comprises the at least one virtual segment 6, wherein the at least one generated virtual segment 6 is based on the modified segment 52.
[0224]
[0225] Note that the virtual reality device 3 is, preferably, virtual reality glasses. The first camera 31 is, preferably, high resolution and can comprise autofocus means.
[0226]
[0227] Additionally, the system of the invention comprises at least one position detection means 7, wherein the position detection means 7 is configured to determine a position of the virtual reality device 3 in the vehicle 1. Furthermore, the system comprises at least one distance sensor 9, wherein the at least one distance sensor 9 is configured to obtain at least one real vector 23, wherein the at least one real vector 23 comprises a modulus and a direction between the virtual reality device 3 and the at least one element of the environment of the user 2. An example of a distance sensor 9 would be a Lidar, that is, a device that makes it possible to determine the distance from a laser emitter to an object or surface using a pulsed laser beam.
[0228]
[0229] Additionally, the system comprises at least one memory unit 41, wherein the at least one memory unit 41 comprises an arrangement of the interior 11 of the vehicle 1, so that at least one theoretical vector 24 is calculated based on the arrangement of the interior of the vehicle 1 and the position of the virtual reality device 3 in the vehicle 1.
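A minimal sketch of how a theoretical vector 24 might be derived from the stored arrangement of the interior 11 and the headset position, under the simplifying assumption that the arrangement is a dictionary of fixed 3-D points (all names and values are hypothetical):

```python
import numpy as np

# Assumed arrangement of the interior 11, as stored in the memory unit 41:
INTERIOR_ARRANGEMENT = {
    "steering_wheel": (0.45, 0.00, 1.00),   # metres, vehicle reference frame
    "dashboard":      (0.70, 0.00, 1.05),
    "windscreen":     (0.95, 0.00, 1.25),
}


def theoretical_vector(headset_position, element_name):
    """Theoretical modulus and direction from the headset 3 to a known
    interior element, computed from the stored arrangement."""
    target = np.asarray(INTERIOR_ARRANGEMENT[element_name], float)
    delta = target - np.asarray(headset_position, float)
    modulus = float(np.linalg.norm(delta))
    return modulus, delta / modulus


print(theoretical_vector((0.0, 0.0, 1.2), "dashboard"))
```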
[0230] The details, shapes, dimensions and other accessory elements, as well as the components used in the implementation of the method and system to display virtual reality information in a vehicle, may be conveniently replaced by others that are technically equivalent without departing from the essence of the invention or from the scope defined by the claims included after the following list.
[0231]
[0232] List of references:
[0233]
[0234] 1 vehicle
[0235] 11 interior
[0236] 12 exterior
[0237] 13 additional object
[0238] 14 second camera
[0239] 15 window area
[0240] 2 user
[0241] 21 head
[0242] 22 field of view
[0243] 23 real vector
[0244] 24 theoretical vector
[0245] 3 virtual reality device
[0246] 31 first camera
[0247] 32 sensor
[0248] 34 accelerometer
[0249] 35 gyroscope
[0250] 36 third camera
[0251] 4 processing unit
[0252] 41 memory unit
[0253] 5 image
[0254] 52 segment
[0255] 6 virtual segment
[0256] 7 position detection means
[0257] 8 exterior image
[0258] 9 distance sensor
Claims
1- Method for displaying virtual reality information in a vehicle (1), where a virtual reality device (3) is used by a user (2) inside the vehicle (1), where the method comprises the steps of:
i) capturing at least one image (5) of an environment of the user (2), where the at least one image (5) captured coincides with a field of view (22) of the user (2), and where the at least one image (5) comprises at least one segment (52),
ii) classifying the at least one segment (52) of the at least one image (5) as:
- interior (11) of the vehicle (1),
- exterior (12) of the vehicle (1), and
- additional object (13) of the vehicle (1),
iii) modifying the at least one segment (52) based on the classification of the at least one segment (52),
iv) generating a virtual environment of the user (2), wherein the virtual environment comprises at least one virtual segment (6), where the at least one virtual segment (6) generated is based on the modified segment (52), and
v) displaying the virtual environment generated by means of the virtual reality device (3).
2- The method according to claim 1, wherein the step of classifying the at least one segment (52) is based on a location of at least one element of the environment of the user (2) with respect to the virtual reality device (3), where the method further comprises the steps of:
a) obtaining at least one real vector (23), wherein the at least one real vector (23) comprises a modulus and a direction between the virtual reality device (3) and the at least one element of the environment of the user (2),
b) determining a position of the virtual reality device (3) in the vehicle (1),
c) assigning the at least one real vector (23) to the at least one segment (52), and
d) comparing the at least one real vector (23) with at least one theoretical vector (24), where the at least one theoretical vector (24) is previously known.
3- Method according to claim 2, wherein the at least one theoretical vector (24) is based on an arrangement of the interior of the vehicle (1) and the position of the virtual reality device (3) in the vehicle (1), where the arrangement of the interior of the vehicle (1) is previously known.
4- The method according to claim 2, comprising a further step of determining at least one window area (15) in the at least one image (5) captured, wherein the step of determining the at least one window area (15) comprises recognizing at least one predefined geometric shape by means of image processing, and/or determining at least one marker in the at least one image (5), wherein the marker comprises a predefined color, and/or comparing the at least one real vector (23) with the at least one theoretical vector (24).
5- The method according to claim 4, wherein the at least one segment (52) is classified as exterior (12) of the vehicle (1) if the at least one segment (52) is arranged in the at least one window area (15) in the at least one image (5).
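By way of a non-limiting sketch of the marker-based variant of claims 4 and 5, assuming the window areas appear in the captured image with an almost uniform predefined colour; the marker colour, tolerance and overlap ratio are arbitrary assumptions:

```python
import numpy as np

WINDOW_MARKER_RGB = np.array([0, 255, 0])   # assumed predefined marker colour
TOLERANCE = 30                               # assumed per-channel tolerance


def window_area_mask(image: np.ndarray) -> np.ndarray:
    """Boolean mask of the pixels belonging to a window area 15 in image 5."""
    diff = np.abs(image.astype(int) - WINDOW_MARKER_RGB)
    return np.all(diff <= TOLERANCE, axis=-1)


def is_exterior(segment_mask: np.ndarray, window_mask: np.ndarray) -> bool:
    """Claim 5: a segment 52 is exterior 12 if it lies (mostly) inside a
    window area 15; the 0.8 overlap ratio is an arbitrary assumption."""
    overlap = np.logical_and(segment_mask, window_mask).sum()
    return overlap / max(segment_mask.sum(), 1) > 0.8


# Tiny synthetic example: the top row of the image carries the marker colour
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, :] = [0, 255, 0]
seg = np.array([[True, True], [False, False]])
print(is_exterior(seg, window_area_mask(img)))   # True -> exterior 12
```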
6- Method according to claim 2, wherein the at least one segment (52) is classified as interior (11) of the vehicle (1) if the real vector (23) of the at least one segment (52) is substantially the same as the theoretical vector (24).
7- The method according to claim 2, wherein the at least one segment (52) is classified as an additional object (13) of the vehicle (1) if the modulus of the real vector (23) of the at least one segment (52) is smaller than the modulus of the theoretical vector (24).
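The comparison rules of claims 6 and 7 can be paraphrased in code as follows; the tolerance is an arbitrary assumption and the fallback to exterior is only one possible reading (claim 5 classifies exterior segments via the window area), so this is an illustrative sketch rather than the patented logic.

```python
INTERIOR, EXTERIOR, ADDITIONAL_OBJECT = 11, 12, 13


def classify_from_vectors(real_modulus, theoretical_modulus, tolerance=0.05):
    """Claims 6-7, for a given viewing direction: interior 11 when the real
    vector 23 substantially matches the theoretical vector 24; additional
    object 13 when something sits closer to the headset than the known
    interior surface."""
    if abs(real_modulus - theoretical_modulus) <= tolerance:
        return INTERIOR
    if real_modulus < theoretical_modulus:
        return ADDITIONAL_OBJECT
    return EXTERIOR   # assumption: farther than the interior, e.g. seen through glass


print(classify_from_vectors(0.72, 0.72))   # 11 -> interior of the vehicle
print(classify_from_vectors(0.40, 0.72))   # 13 -> e.g. the driver's hand
```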
8- Method according to claim 1, wherein the step of generating a virtual environment comprises superimposing the at least one virtual segment (6) corresponding to the at least one segment (52) of additional object (13) of the vehicle (1) on the at least one segment (52) of interior (11) of the vehicle (1), and comprises superimposing the at least one virtual segment (6) corresponding to the at least one segment (52) of additional object (13) of the vehicle (1) on the at least one segment (52) of exterior (12) of the vehicle (1).
9- Method according to claim 8, wherein modifying the at least one segment (52) of additional object (13) of the vehicle (1) comprises altering a color, a texture and/or a shape of the at least one segment (52), so that a design of the additional object (13) is altered in the virtual environment shown by means of the virtual reality device (3).
10- Method according to claim 8, wherein the step of generating a virtual environment of the user (2) comprises adding at least one additional object (13) to the virtual environment, where the at least one additional object (13) replaces the at least one interior segment (52) of the vehicle (1) and/or the at least one exterior segment (52) of the vehicle (1).
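A minimal compositing sketch of claims 8 to 10, in which the pixels of segments classified as additional object 13 are drawn on top of the already modified interior 11 and exterior 12 layers; array shapes and names are assumptions:

```python
import numpy as np


def compose_virtual_environment(base_layer, object_pixels, object_mask):
    """Claim 8: superimpose the virtual segments 6 of additional objects 13
    (e.g. the user's hands or an animated character) on the interior 11 and
    exterior 12 layers already placed in base_layer."""
    frame = base_layer.copy()
    frame[object_mask] = object_pixels[object_mask]   # objects always on top
    return frame


# Hypothetical 2x2 example: one 'hand' pixel drawn over the background layers
base = np.zeros((2, 2, 3), dtype=np.uint8)
hand = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[True, False], [False, False]])
print(compose_virtual_environment(base, hand, mask))
```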
11- Method according to any of the preceding claims, wherein modifying the at least one interior segment (52) of the vehicle (1) comprises altering a color, a texture and/or a shape of the at least one segment (52), so that a design of the interior of the vehicle (1) is altered in the virtual environment shown by means of the virtual reality device (3).
12- Method according to any of the preceding claims, comprising a further step of capturing at least one exterior image (8) of an environment of the vehicle (1), and wherein modifying the at least one interior segment (52) (11) of the vehicle (1) comprises replacing the at least one interior segment (52) (11) of the at least one image (5) with at least one section of the exterior image (8) of the environment of the vehicle (1), where the at least one section of the exterior image (8) coincides with the field of view (22) of the user (2) occupied by the at least one segment (52) replaced.
13- Method according to any of the preceding claims, wherein modifying the at least one exterior segment (52) of the vehicle (1) comprises altering a brightness, a color, a contrast and/or a saturation of the at least one segment (52), so that a visualization of the exterior of the vehicle (1) is altered in the virtual environment shown by means of the virtual reality device (3).
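As one illustrative reading of claim 13, brightness and contrast of the exterior pixels could be altered with a simple linear transform such as the sketch below (gain and offset are arbitrary assumptions; altering saturation would additionally require a colour-space conversion):

```python
import numpy as np


def adjust_exterior(image, exterior_mask, gain=1.2, offset=10):
    """Alter brightness/contrast of the pixels classified as exterior 12:
    out = gain * in + offset, clipped to the valid 8-bit range."""
    out = image.astype(np.float32)
    out[exterior_mask] = out[exterior_mask] * gain + offset
    return np.clip(out, 0, 255).astype(np.uint8)


# Hypothetical example: brighten the top row of a dark frame
img = np.full((2, 2, 3), 50, dtype=np.uint8)
mask = np.array([[True, True], [False, False]])
print(adjust_exterior(img, mask))   # top row becomes 50 * 1.2 + 10 = 70
```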
14- Method according to any of the preceding claims, comprising a further step of capturing at least one second image by means of at least one second camera (14), wherein the at least one second camera (14) comprises night vision and/or a wide-angle lens, and wherein modifying the at least one exterior segment (52) of the vehicle (1) comprises replacing the at least one segment (52) of the at least one image (5) with at least one section of the second image captured by the at least one second camera (14), wherein the at least one section of the second image captured coincides with the field of view (22) of the user (2) occupied by the at least one segment (52) replaced.
15- Method according to any of the preceding claims, characterized in that the step of generating a virtual environment of the user (2) comprises incorporating at least one item of additional virtual information.
16- System for displaying virtual reality information in a vehicle (1), where the system comprises:
- a virtual reality device (3), wherein the virtual reality device (3) comprises at least one screen,
- at least one first camera (31), wherein the at least one first camera (31) is configured to capture at least one image (5), where the at least one image (5) coincides with a field of view (22) of the user (2), and
- at least one processing unit (4), wherein the at least one processing unit (4) is configured to classify the at least one segment (52) of the at least one image (5) as interior (11) of the vehicle (1), exterior (12) of the vehicle (1) or additional object (13) of the vehicle (1), where the at least one processing unit (4) is configured to modify the at least one segment (52) of the at least one image (5) based on the classification of the at least one segment (52), and where the at least one processing unit (4) is configured to generate a virtual environment of the user (2), where the virtual environment comprises the at least one virtual segment (6), wherein the at least one virtual segment (6) generated is based on the modified segment (52).
17- System according to claim 16, characterized in that it comprises at least one position detection means (7), where the position detection means (7) is configured to determine a position of the virtual reality device (3) in the vehicle (1).
18- System according to any of claims 16 or 17, characterized in that it comprises at least one distance sensor (9), where the at least one distance sensor (9) is configured to obtain at least one real vector (23), where the at least one real vector (23) comprises a modulus and a direction between the virtual reality device (3) and the at least one element of the environment of the user (2).
19- System according to any of claims 16 to 18, characterized in that it comprises at least one memory unit (41), wherein the at least one memory unit (41) comprises an arrangement of the interior (11) of the vehicle (1), so that at least one theoretical vector (24) is calculated based on the arrangement of the interior of the vehicle (1) and the position of the virtual reality device (3) in the vehicle (1).